10 research outputs found

    Joint Blind Motion Deblurring and Depth Estimation of Light Field

    Full text link
    Removing camera motion blur from a single light field is a challenging task, since it is a highly ill-posed inverse problem. The problem becomes even harder when the blur kernel varies spatially due to scene depth variation and high-order camera motion. In this paper, we propose a novel algorithm that jointly estimates all variables of the blur model, namely the latent sub-aperture image, the camera motion, and the scene depth, from the blurred 4D light field. Exploiting the multi-view nature of a light field alleviates the ill-posedness of the optimization by providing strong depth cues and multi-view blur observations. The proposed joint estimation achieves high-quality light field deblurring and depth estimation simultaneously under arbitrary 6-DOF camera motion and unconstrained scene depth. Extensive experiments on real and synthetic blurred light fields confirm that the proposed algorithm outperforms state-of-the-art light field deblurring and depth estimation methods.
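    A minimal sketch of the alternating (coordinate-descent) structure such a joint estimation typically takes. Everything specific below is an assumption for illustration only: the 6-DOF camera trajectory is collapsed into a single 1-D blur length k, depth is omitted, and the data term is a plain least-squares residual; the paper's actual model is far richer.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_kernel(k):
    """Horizontal box kernel of length k (toy stand-in for 6-DOF motion)."""
    return np.ones((1, k)) / k

def deconv_step(blurred, latent, kernel, lr=0.5):
    """One gradient step on ||K * latent - blurred||^2 w.r.t. the latent image."""
    resid = fftconvolve(latent, kernel, mode="same") - blurred
    return latent - lr * fftconvolve(resid, kernel[::-1, ::-1], mode="same")

def motion_step(blurred, latent, candidates):
    """Re-estimate the motion parameter by grid search on the data term."""
    errs = [np.sum((fftconvolve(latent, blur_kernel(k), mode="same") - blurred) ** 2)
            for k in candidates]
    return candidates[int(np.argmin(errs))]

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                       # toy ground-truth image
blurred = fftconvolve(sharp, blur_kernel(9), mode="same")

latent, k = blurred.copy(), 3                      # crude initialization
for _ in range(20):                                # alternating minimization
    for _ in range(10):
        latent = deconv_step(blurred, latent, blur_kernel(k))
    k = motion_step(blurred, latent, candidates=list(range(3, 15, 2)))
print("estimated blur length:", k)                 # ideally recovers 9
```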

    Coded Aperture Flow

    Get PDF
    Real cameras have a limited depth of field. The resulting defocus blur is a valuable cue for estimating the depth structure of a scene. Using coded apertures, depth can be estimated from a single frame. For optical flow estimation between frames, however, the depth-dependent degradation can introduce errors. These errors are most prominent when objects move relative to the focal plane of the camera. We incorporate coded aperture defocus blur into optical flow estimation and allow for piecewise smooth 3D motion of objects. With coded aperture flow, we can establish dense correspondences between pixels in succeeding coded aperture frames. We compare several approaches to compute accurate correspondences for coded aperture images showing objects with arbitrary 3D motion.
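    A hedged sketch of the kind of data term this suggests: a photometric cost over joint (flow, defocus) hypotheses, using the classic cross-blurring trick to equalize defocus on both sides before comparison. The Gaussian PSF, the toy scene, and all parameter values below are assumptions standing in for the actual coded-aperture PSF and the paper's formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def matching_cost(f0, f1, uv, sigma0, sigma1):
    """Photometric cost between two coded-aperture frames under a joint
    (flow, defocus) hypothesis. Blurring each frame with the other frame's
    hypothesized PSF equalizes the total blur on both sides before the
    comparison; an isotropic Gaussian stands in for the true coded PSF."""
    ux, uy = uv
    a = gaussian_filter(f0, sigma1)            # add frame 1's blur to frame 0
    b = gaussian_filter(f1, sigma0)            # add frame 0's blur to frame 1
    b = shift(b, (-uy, -ux), order=1)          # warp frame 1 back by the flow
    return np.mean((a - b) ** 2)

# Toy scene: the object translates 3 px in x and moves away from the focal
# plane (defocus grows from sigma 1 to sigma 2) between the two frames.
sharp = np.random.default_rng(1).random((48, 48))
f0 = gaussian_filter(sharp, 1.0)
f1 = gaussian_filter(shift(sharp, (0.0, 3.0), order=1), 2.0)

labels = [((ux, 0.0), s1) for ux in (0.0, 3.0, 6.0) for s1 in (1.0, 2.0, 3.0)]
best = min(labels, key=lambda l: matching_cost(f0, f1, l[0], 1.0, l[1]))
print("best (flow, sigma1):", best)            # should favour ((3, 0), 2)
```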

    Floating Textures

    No full text
    We present a novel multi-view, projective texture mapping technique. While previous multi-view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps (“floats”) projected textures during run-time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real-time frame rates. The method is very generally applicable and can be used in combination with many image-based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free-viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.
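    A rough sketch of the core idea, as a CPU stand-in for the paper's GPU implementation: estimate flow between the projected textures and warp them into agreement before blending, so small misalignments "float" away instead of ghosting. Farneback flow, alignment to texture 0 (rather than the paper's weighted pairwise warps), and the [0, 1] float32 input convention are all assumptions of this sketch.

```python
import cv2
import numpy as np

def floating_blend(textures, weights):
    """Blend multi-view projected textures after flow-based alignment.

    textures: HxW float32 images in [0, 1], all projections of one surface
    weights:  per-view blending weights (e.g. view-dependent), summing to 1
    """
    ref = textures[0]
    h, w = ref.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    ref8 = (ref * 255).astype(np.uint8)
    out = weights[0] * ref
    for wt, tex in zip(weights[1:], textures[1:]):
        tex8 = (tex * 255).astype(np.uint8)
        flow = cv2.calcOpticalFlowFarneback(ref8, tex8, None,
                                            0.5, 3, 21, 3, 5, 1.1, 0)
        # Pull each pixel of `tex` back along the flow so it lands on `ref`.
        warped = cv2.remap(tex, gx + flow[..., 0], gy + flow[..., 1],
                           cv2.INTER_LINEAR)
        out += wt * warped
    return out
```

    Warping everything against a single reference is a simplification; the paper instead distributes the correction across all views with weighted pairwise flows.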

    Franval y Emilia

    No full text
    According to Herrera y Navarro, this work is a translation of Mercier's "Les Tombeaux de Vérone", by way of Francesco Albergati's "Dorvil". Preceding the title: "Num. 151". According to the CERL Thesaurus, Juan Sellent was the administrator of the Piferrer family's bookshop between 1785 and 1815. Signatures: A-C4, D

    Modeling Blurred Video with Layers

    No full text
    Videos contain complex spatially-varying motion blur caused by the combination of object motion, camera motion, and depth variation with finite shutter times. Existing methods to estimate optical flow, deblur the images, and segment the scene fail in such cases. In particular, boundaries between differently moving objects cause problems, because there the blurred image is a combination of the blurred appearances of multiple surfaces. We address this with a novel layered model of scenes in motion. From a motion-blurred video sequence, we jointly estimate the layer segmentation and each layer's appearance and motion. Since the blur is a function of the layer motion and segmentation, it is completely determined by our generative model. Given a video, we formulate the optimization problem as minimizing the pixel error between the blurred frames and images synthesized from the model, and solve it using gradient descent. We demonstrate our approach on synthetic and real sequences.
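    A minimal sketch of the generative forward model described above: a blurred frame is synthesized as the time average of sharp back-to-front composites of the layers over the open shutter. Bilinear shifting, the sample count, and constant per-layer velocities are simplifying assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import shift

def render_blurred(layers, alphas, velocities, n_samples=16):
    """Synthesize one motion-blurred frame from layered appearance.

    layers:     HxW appearance images, ordered back to front
    alphas:     matching soft masks in [0, 1] (back layer typically all ones)
    velocities: per-layer (vy, vx) displacement over the shutter interval
    """
    acc = np.zeros_like(layers[0])
    for t in np.linspace(0.0, 1.0, n_samples):      # sample the open shutter
        frame = np.zeros_like(acc)
        for img, a, (vy, vx) in zip(layers, alphas, velocities):
            img_t = shift(img, (t * vy, t * vx), order=1)
            a_t = shift(a, (t * vy, t * vx), order=1)
            frame = (1.0 - a_t) * frame + a_t * img_t   # "over" compositing
        acc += frame
    return acc / n_samples
```

    Fitting then means minimizing the squared pixel error between the output of such a renderer and the observed frames with respect to layers, masks, and motions, e.g. via an autodiff framework, mirroring the gradient-descent formulation described in the abstract.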

    Two Algorithms for Motion Estimation from Alternate Exposure Images

    No full text
    Most algorithms for dense 2D motion estimation assume pairs of images acquired with an idealized, infinitely short exposure time. In this work we compare two approaches that use an additional, motion-blurred image of a scene to estimate highly accurate, dense correspondence fields. We consider video sequences acquired with alternating exposure times, so that a short-exposure image is followed by a long-exposure image that exhibits motion blur. For both motion estimation algorithms we employ an image formation model that relates the motion-blurred image to the two bracketing short-exposure images. With this model we can not only decipher the motion information encoded in the long-exposure image, but also estimate occlusion timings, which are a prerequisite for artifact-free frame interpolation. The first approach solves for the motion in a pointwise least-squares formulation, while the second formulates a global, total-variation-regularized problem. Both approaches are evaluated in detail and compared to each other and to state-of-the-art motion estimation algorithms.
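    A hedged sketch of the formation model and the least-squares idea: the long exposure is modeled as a time average of the two bracketing short exposures warped along a candidate motion, and the motion minimizing the squared residual is selected. A single global motion and a small candidate grid are simplifications; the paper solves a pointwise least-squares problem and a TV-regularized global problem over dense motion fields.

```python
import numpy as np
from scipy.ndimage import shift

def synth_long_exposure(i0, i1, u, n=16):
    """Predict the blurred frame from short exposures i0 (time 0) and
    i1 (time 1) under a global 2D motion u = (uy, ux): the scene at time t
    is approximated by cross-fading the two images warped along u."""
    acc = np.zeros_like(i0)
    for t in np.linspace(0.0, 1.0, n):
        w0 = shift(i0, (t * u[0], t * u[1]), order=1)        # i0 pushed to time t
        w1 = shift(i1, (-(1 - t) * u[0], -(1 - t) * u[1]),
                   order=1)                                  # i1 pulled back to time t
        acc += (1.0 - t) * w0 + t * w1
    return acc / n

def estimate_motion(i0, i1, blurred, candidates):
    """Least-squares grid search over candidate motions (global toy version;
    the paper solves this pointwise and with TV regularization)."""
    errs = [np.sum((synth_long_exposure(i0, i1, u) - blurred) ** 2)
            for u in candidates]
    return candidates[int(np.argmin(errs))]
```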